Autumn 2017
Link to my GitHub repository
https://github.com/aadomino/IODS-project
Our era of data - larger than ever and complex like chaos - requires several skills from statisticians and other data scientists. We must discover the patterns hidden behind numbers in matrices and arrays.
We are not afraid of
We want to
These are the core themes of Open Data Science and this course.
The objective of this week was learning, performing and interpreting the results of regression analysis. This part includes code, interpretations and explanations of the results obtained with blood, sweat and tears.
The data used in this part comes from an international survey of approaches to learning (see more on this page).
You can find the pre-processed data here: GitHub data repository.
The dataset, learning2014, consists of 166 observations and 7 variables - these are the dimensions of the data. You can see it here:
learning2014 <- read.table("C:/Users/P8Z77-V/Documents/learning2014.csv", header = TRUE, sep = "\t")
dim(learning2014)
## [1] 166 7
It is also possible to inspect the structure of the data frame:
str(learning2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: int 37 31 25 35 37 38 35 29 38 21 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
The variables occurring in the data are gender, age, attitude, deep, stra, surf, and points.
These variables are used to categorise the answers of students from the survey. The questions pertained to the students’ assessment of their deep-, strategic-, and surface learning, and the data was also collected on their age, gender, and attitude towards statistics.
The dataset does not contain the students who received 0 points from the final exam. These results have been filtered out:
learning2014 <- dplyr::filter(learning2014, points > 0)
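As a minimal sketch of what this filter does, here is the base-R equivalent of the dplyr call above, run on a toy data frame (not the actual dataset):

```r
# A toy data frame mimicking the points column.
toy <- data.frame(id = 1:4, points = c(25, 0, 12, 0))

# Keep only rows with points > 0, equivalent to dplyr::filter(toy, points > 0).
toy_filtered <- toy[toy$points > 0, ]
toy_filtered$points  # the two zero-point rows are gone
```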
The summary of the data includes a lot of information on all the variables. There are minimum, maximum, median and mean values of each category, and the first and the third quartiles of the data.
summary(learning2014)
## gender age attitude deep surf
## F:110 Min. :17.00 Min. :14.00 Min. :1.583 Min. :1.583
## M: 56 1st Qu.:21.00 1st Qu.:26.00 1st Qu.:3.333 1st Qu.:2.417
## Median :22.00 Median :32.00 Median :3.667 Median :2.833
## Mean :25.51 Mean :31.43 Mean :3.680 Mean :2.787
## 3rd Qu.:27.00 3rd Qu.:37.00 3rd Qu.:4.083 3rd Qu.:3.167
## Max. :55.00 Max. :50.00 Max. :4.917 Max. :4.333
## stra points
## Min. :1.250 Min. : 7.00
## 1st Qu.:2.625 1st Qu.:19.00
## Median :3.188 Median :23.00
## Mean :3.121 Mean :22.72
## 3rd Qu.:3.625 3rd Qu.:27.75
## Max. :5.000 Max. :33.00
An overview of this dataset produces a few interesting observations. There are clearly more female respondents than male ones, and the age variable reflects a typical age distribution of university students: most are in their twenties, with ages ranging from 17 to 55 and a few older outliers. The attitude variable shows that the survey participants approach statistics with a slightly more positive than negative attitude. Deep learning is favoured over strategic or surface learning (the deep, stra and surf variables). The final exam results range from 7 to 33.
With a plot matrix we can visualise the variables and the relationships between them.
# Access the GGally and ggplot2 libraries.
library(GGally)
library(ggplot2)
# Read a plot matrix with ggpairs() into a variable p0. Draw.
p0 <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
p0
It is a complex plot. One thing that stands out is that there is some positive correlation between exam points received by the students and their attitude towards statistics, but the correlation between exam points and other variables is generally weak. In general, the correlations between variables in our dataset are not strong.
This positive correlation makes sense: if a student has a positive outlook on statistics (as we all do), they are more likely to learn and obtain good results on the final test. It is possible to visualise this particular relationship in more detail. Here, it is done by plotting the variables attitude and points as a scatterplot, with colour coding gender. Regression lines are also included:
# Access the ggplot2 library.
library(ggplot2)
# Draw the plot (p1) with our data. Define the mapping. Define the visualization type (dots) and smoothing. Add the plot title.
p1 <- ggplot(learning2014, aes(x = attitude, y = points, col = gender)) + geom_point() + geom_smooth(method = "lm") + ggtitle("Students' attitude towards statistics vs final exam points")
p1
In this part we will choose and fit a suitable regression model, which will explain the data in more detail - we want to find out which factors influence the amount of exam points received. Points is the target (dependent) variable. The previous section shows that attitude, stra and surf correlate most strongly with points. They will be our explanatory variables in this model, the summary of which is printed out below:
# Fit a regression model (m0) with multiple explanatory variables: attitude, stra, surf. Print a summary of the model.
m0 <- lm(points ~ attitude + stra + surf, data = learning2014)
summary(m0)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.01711 3.68375 2.991 0.00322 **
## attitude 0.33952 0.05741 5.913 1.93e-08 ***
## stra 0.85313 0.54159 1.575 0.11716
## surf -0.58607 0.80138 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
The results show the variables used in the model.
Residuals are assumed to be normally distributed with zero mean and constant variance. The median is indeed close to zero and the residuals seem to follow normal distribution.
Coefficients show the estimated effect of each explanatory variable on the target variable - the expected change in the outcome for a one-unit change in that explanatory variable, holding the others constant. In other words, here, if attitude increases by 1, the expected exam score increases by 0.33952 points. The more excited the students are about the subject, the better chances they have to pass the final with flying colours.
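As a concrete reading of these coefficients, the fitted equation can be evaluated by hand (the estimates are taken from the summary above; the chosen predictor values are illustrative, not from the data):

```r
# Coefficient estimates from the model summary above.
b <- c(intercept = 11.01711, attitude = 0.33952, stra = 0.85313, surf = -0.58607)

# Predicted exam points for a student with attitude 30, stra 3, surf 3.
pred <- b["intercept"] + b["attitude"] * 30 + b["stra"] * 3 + b["surf"] * 3
unname(pred)  # about 22 points
```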
The summary also shows the standard errors, t- and p-values, and marks the significance levels. The effect of attitude on the dependent variable (exam points) is statistically significant, while stra and surf are not (p-values over .05). If an explanatory variable in the model does not have a statistically significant relationship with the target variable, we remove the variable from the model and fit the model again without it. In the refitted model summarised below, the residuals' median has decreased and attitude remains highly statistically significant.
# Create a regression model m1 with only attitude. Print a summary of the model.
m1 <- lm(points ~ attitude, data = learning2014)
summary(m1)
##
## Call:
## lm(formula = points ~ attitude, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.63715 1.83035 6.358 1.95e-09 ***
## attitude 0.35255 0.05674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
The second summary indicates that the estimated effect of students’ attitude on exam results is 0.35255. Again, this means that for each unit increase in attitude, the exam results are also expected to increase.
The multiple R-squared value evaluates how much of the changes (variance) of the target variable is explained by the model. The rest of the variance is explained by some other factors that are not included. It could be understood as a goodness of fit measure.
The multiple R-squared is higher in the first model, even though two of its explanatory variables were shown to be statistically non-significant and were subsequently dropped. This is expected: multiple R-squared never decreases when variables are added to the model, irrespective of their significance, which is why the adjusted R-squared is better suited for comparing models.
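This inflation is easy to demonstrate on simulated data: adding a pure-noise predictor can only increase multiple R-squared (all variable names below are made up for the demonstration):

```r
set.seed(2017)
n <- 100
x <- rnorm(n)
y <- 2 * x + rnorm(n)  # y truly depends only on x
noise <- rnorm(n)      # a predictor unrelated to y

r2_small <- summary(lm(y ~ x))$r.squared
r2_big   <- summary(lm(y ~ x + noise))$r.squared

# r2_big is never smaller than r2_small, even though noise is irrelevant.
c(r2_small, r2_big)
```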
Residuals vs Fitted values, Normal QQ-plot and Residuals vs Leverage:
# Diagnostic plots using the plot() function. Choose the plots 1, 2 and 5.
par(mfrow = c(1,1))
plot(m1, which = c(1,2,5))
These plots allow us to assess if some of the assumptions we made about our linear regression model are correct.
The first plot, residuals vs fits, is a scatter plot of residuals on the y axis and fitted values (estimated responses) on the x axis. The plot is used to detect non-linearity, unequal error variances, and outliers - as simply explained here.
Our plot seems to show that residuals and the fitted values are uncorrelated, just as they should be in a linear model with normally distributed errors and constant variance. In other words, the scatter plot confirms our assumption about the error distribution and variance. Great.
The second plot is a Q-Q plot (quantile-quantile plot), which is used to assess if the target variable we took from our dataset really has the distribution we assumed in our model, which, for us, is a normal distribution. (A great source on interpreting this kind of plots can be found here).
A Q-Q plot is a scatterplot created by plotting two sets of quantiles against one another. If both sets of quantiles came from the same distribution, we should see the points forming a line that’s roughly straight.
Our points indeed fall on a roughly straight line. Assumption confirmed.
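A quick way to see how this works is to compute the two quantile sets for simulated normal data; qqnorm() returns the theoretical and sample quantiles that the plot draws (this is a generic sketch, not tied to our model):

```r
set.seed(2017)
sample_data <- rnorm(200)

# qqnorm() computes theoretical (x) and sample (y) quantiles.
q <- qqnorm(sample_data, plot.it = FALSE)

# For normal data the points lie close to a straight line,
# so the two quantile sets are almost perfectly correlated.
cor(q$x, q$y)
```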
The third plot, Residuals vs Leverage, allows us to see if the extreme values in the data influence the regression line, i.e. if the fact that we include them in our dataset influences the overall results.
The patterns in this plot are not really relevant. There are two things to look for:

* outlying values in the upper right or lower right corner - values far away from the rest of the data points,
* cases outside of the dashed red line (Cook’s distance).
In our plot, we have no influential cases. The Cook’s distance lines are not even visible, which means that all our data fits well within the lines. There are no extreme values. The plot is actually typical for the datasets with no influential cases.
Finally, some more reading I enjoyed on the subject of diagnostic plots.
The objective of this week was learning how to join together data from different sources for further analysis and analysing the results of logistic regression. This part includes code, interpretations and explanations of the results. Definitely wasn’t easy-peasy.
My processed data in .csv format and the script used to process it can be found under these links.
The data comes from The Machine Learning Repository at UCI.
> This data approach student achievement in secondary education of two Portuguese schools. The data attributes include student grades, demographic, social and school related features and it was collected by using school reports and questionnaires. Two datasets are provided regarding the performance in two distinct subjects: Mathematics (mat) and Portuguese (por). -source
The source data was merged; the variables not used for joining the two data have been combined by averaging (including the grade variables). Two new variables were included: alc_use, the average of ‘Dalc’ and ‘Walc’ (which describe alcohol use on weekdays and on weekends, respectively); and high_use, which is TRUE if alc_use is higher than 2 and FALSE if it is not.
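The two derived variables can be sketched like this (toy Dalc/Walc values; the actual wrangling script is linked above):

```r
# Toy weekday/weekend alcohol-use scores on the 1-5 scale.
toy <- data.frame(Dalc = c(1, 2, 4), Walc = c(1, 3, 5))

# alc_use is the average of the two; high_use flags averages above 2.
toy$alc_use  <- (toy$Dalc + toy$Walc) / 2
toy$high_use <- toy$alc_use > 2
toy
```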
alc <- read.table("C:\\Users\\P8Z77-V\\Documents\\GitHub\\IODS-project\\data\\alc.csv", header = TRUE, sep = ";")
str(alc)
## 'data.frame': 382 obs. of 35 variables:
## $ school : Factor w/ 2 levels "GP","MS": 1 1 1 1 1 1 1 1 1 1 ...
## $ sex : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
## $ age : int 18 17 15 15 16 16 16 17 15 15 ...
## $ address : Factor w/ 2 levels "R","U": 2 2 2 2 2 2 2 2 2 2 ...
## $ famsize : Factor w/ 2 levels "GT3","LE3": 1 1 2 1 1 2 2 1 2 1 ...
## $ Pstatus : Factor w/ 2 levels "A","T": 1 2 2 2 2 2 2 1 1 2 ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ Mjob : Factor w/ 5 levels "at_home","health",..: 1 1 1 2 3 4 3 3 4 3 ...
## $ Fjob : Factor w/ 5 levels "at_home","health",..: 5 3 3 4 3 3 3 5 3 3 ...
## $ reason : Factor w/ 4 levels "course","home",..: 1 1 3 2 2 4 2 2 2 2 ...
## $ nursery : Factor w/ 2 levels "no","yes": 2 1 2 2 2 2 2 2 2 2 ...
## $ internet : Factor w/ 2 levels "no","yes": 1 2 2 2 1 2 2 1 2 2 ...
## $ guardian : Factor w/ 3 levels "father","mother",..: 2 1 2 2 1 2 2 2 2 2 ...
## $ traveltime: int 2 1 1 1 1 1 1 2 1 1 ...
## $ studytime : int 2 2 2 3 2 2 2 2 2 2 ...
## $ failures : int 0 0 2 0 0 0 0 0 0 0 ...
## $ schoolsup : Factor w/ 2 levels "no","yes": 2 1 2 1 1 1 1 2 1 1 ...
## $ famsup : Factor w/ 2 levels "no","yes": 1 2 1 2 2 2 1 2 2 2 ...
## $ paid : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 1 1 2 2 ...
## $ activities: Factor w/ 2 levels "no","yes": 1 1 1 2 1 2 1 1 1 2 ...
## $ higher : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
## $ romantic : Factor w/ 2 levels "no","yes": 1 1 1 2 1 1 1 1 1 1 ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ freetime : int 3 3 3 2 3 4 4 1 2 5 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ Dalc : int 1 1 2 1 1 1 1 1 1 1 ...
## $ Walc : int 1 1 3 1 2 2 1 1 1 1 ...
## $ health : int 3 3 3 5 5 5 3 1 1 5 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ G1 : int 2 7 10 14 8 14 12 8 16 13 ...
## $ G2 : int 8 8 10 14 12 14 12 9 17 14 ...
## $ G3 : int 8 8 11 14 12 14 12 10 18 14 ...
## $ alc_use : num 1 1 2.5 1 1.5 1.5 1 1 1 1 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
The data includes 382 observations and 35 variables. The attributes have to do with the students’ background, learning outcomes, family, health, and free time activities.
It is reasonable to assume that some variables are positively correlated with some others. Judging by the DataCamp exercises completed earlier, sex, absences and failures should be correlated with alcohol consumption. I have also chosen freetime as a variable with potential to correlate with it - if the students have more free time, they are more likely to consume alcohol. Even though they shouldn’t and they know it.
So, the hypotheses - each of them assuming positive correlation with alcohol consumption - are the following: male sex, a higher number of absences, a higher number of class failures, and a greater amount of free time are each associated with higher alcohol consumption.
In this part, let’s numerically and graphically explore the distributions of the chosen variables and their relationships with alcohol consumption, and see if our hypotheses hold up.
First, let’s access the libraries needed in this part.
# Access the libraries needed in this section.
library(dplyr)
##
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
##
## nasa
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(tidyr)
library(ggplot2)
library(boot)
Next, it’s useful to have a good look at all the variables in graphical form. This way it is easier to see the tendencies and the distribution of the data.
The bar plots show that the distribution of males and females is very balanced, with slightly fewer male students.
Most of the students haven’t failed their classes, but there are some who have had one or more failures.
Free time is interestingly distributed: the students assessed their amount of free time on a scale of 1-5. Most reported an average amount, and those reporting a lot or very much free time outnumber those reporting little.
The majority of students have less than 10 absences, but there are some outliers.
# The chosen variables: sex, failures, free time and absences.
ggplot(data = alc, aes(x = sex)) + geom_bar()
ggplot(data = alc, aes(x = failures)) + geom_bar()
ggplot(data = alc, aes(x = freetime)) + geom_bar()
ggplot(data = alc, aes(x = absences)) + geom_bar()
Alcohol consumption is generally low. The students who have admitted using a lot of alcohol constitute less than 1/3 of all students.
# Alcohol use and high use.
ggplot(data = alc, aes(x = alc_use)) + geom_bar()
ggplot(data = alc, aes(x = high_use)) + geom_bar()
What if we combine some of the variables? Let’s see if we can get an idea about some of the assumptions made earlier.
The relation of high alcohol use with gender seems to confirm the hypothesis - there are more heavy drinkers among males than among females.
g0 <- ggplot(data = alc, aes(x = high_use))
g0 + geom_bar() + facet_wrap("sex")
What about failures? The numbers are low, so it is difficult to assess from the bar plots, but it seems that at least for the group of students with the highest number of failures, the number of heavy drinkers surpasses the number of those who are not. Heavy drinking is a more prominent factor in groups with 1 or 2 failures than it is in the group with 0 failures.
g1 <- ggplot(data = alc, aes(x = high_use))
g1 + geom_bar() + facet_wrap("failures")
In the following plots, high_use is the target variable. For the colour visualisation, sex is used, and absences, failures and freetime are the explanatory variables.
g2 <- ggplot(alc, aes(x = high_use, col = sex, y = absences))
g2 + geom_boxplot() + ylab("absences")
g3 <- ggplot(alc, aes(x = high_use, col = sex, y = failures))
g3 + geom_boxplot() + ylab("failures")
g4 <- ggplot(alc, aes(x = high_use, col = sex, y = freetime))
g4 + geom_boxplot() + ylab("free time")
This visualisation also seems to confirm the first hypothesis - high alcohol consumption is more likely associated with male sex.
Cross tabulation lets us compare the relationship between any two variables.
Here, alc_use is the variable of interest. The other variables taken into account show a clear upward tendency. This is consistent with our hypotheses.
alc %>% group_by(alc_use, sex) %>% summarise(count = n(), mean_absences = mean(absences), mean_failures = mean(failures), mean_freetime = mean(freetime))
## # A tibble: 17 x 6
## # Groups: alc_use [?]
## alc_use sex count mean_absences mean_failures mean_freetime
## <dbl> <fctr> <int> <dbl> <dbl> <dbl>
## 1 1.0 F 87 3.781609 0.10344828 2.885057
## 2 1.0 M 53 2.660377 0.11320755 3.603774
## 3 1.5 F 42 4.642857 0.04761905 2.833333
## 4 1.5 M 27 3.592593 0.29629630 3.111111
## 5 2.0 F 27 5.000000 0.25925926 3.222222
## 6 2.0 M 32 3.000000 0.18750000 3.281250
## 7 2.5 F 26 6.576923 0.19230769 3.307692
## 8 2.5 M 18 6.222222 0.05555556 3.222222
## 9 3.0 F 11 7.636364 0.54545455 3.272727
## 10 3.0 M 21 5.285714 0.57142857 3.428571
## 11 3.5 F 3 8.000000 0.33333333 3.666667
## 12 3.5 M 14 5.142857 0.50000000 3.714286
## 13 4.0 F 1 3.000000 0.00000000 3.000000
## 14 4.0 M 8 6.375000 0.25000000 3.500000
## 15 4.5 M 3 12.000000 0.00000000 3.333333
## 16 5.0 F 1 3.000000 0.00000000 5.000000
## 17 5.0 M 8 7.375000 0.62500000 4.000000
If high_use is the variable of interest, the tendency is also pronounced. Especially the hypothesis of heavy drinking correlating with absences stands out.
alc %>% group_by(high_use, sex) %>% summarise(count = n(), mean_absences = mean(absences), mean_failures = mean(failures), mean_freetime = mean(freetime))
## # A tibble: 4 x 6
## # Groups: high_use [?]
## high_use sex count mean_absences mean_failures mean_freetime
## <lgl> <fctr> <int> <dbl> <dbl> <dbl>
## 1 FALSE F 156 4.224359 0.1153846 2.929487
## 2 FALSE M 112 2.982143 0.1785714 3.392857
## 3 TRUE F 42 6.785714 0.2857143 3.357143
## 4 TRUE M 72 6.125000 0.3750000 3.500000
In this part, we use logistic regression to statistically explore the relationship between the chosen variables and the binary high/low alcohol consumption variable as the target variable.
# Find the model with glm()
m0 <- glm(high_use ~ sex + absences + failures + freetime, data = alc, family = "binomial")
# Present a summary of the fitted model.
summary(m0)
##
## Call:
## glm(formula = high_use ~ sex + absences + failures + freetime,
## family = "binomial", data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.9664 -0.8170 -0.6065 1.0585 2.0330
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.75479 0.46053 -5.982 2.21e-09 ***
## sexM 0.84223 0.24705 3.409 0.000652 ***
## absences 0.09461 0.02280 4.150 3.33e-05 ***
## failures 0.43004 0.19312 2.227 0.025961 *
## freetime 0.27453 0.12543 2.189 0.028621 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 419.50 on 377 degrees of freedom
## AIC: 429.5
##
## Number of Fisher Scoring iterations: 4
The results here are quite consistent with the previous observations.
The associations of male sex and of absences with high alcohol consumption are highly significant (p < 0.001). By contrast, failures and free time are significant only at the 0.05 level.
Finally, it is time to present and interpret the coefficients of the model as odds ratios, together with their confidence intervals, and compare the results to the previously stated hypotheses.
# Compute odds ratios (OR).
OR1 <- coef(m0) %>% exp
# Compute confidence intervals (CI).
CI1 <- confint(m0) %>% exp
## Waiting for profiling to be done...
# Print the odds ratios along with their confidence intervals.
cbind(OR1, CI1)
## OR1 2.5 % 97.5 %
## (Intercept) 0.06362265 0.02504274 0.1528626
## sexM 2.32154106 1.43694673 3.7924512
## absences 1.09922964 1.05343261 1.1522505
## failures 1.53731413 1.05427814 2.2602725
## freetime 1.31591429 1.03172262 1.6889201
I had to remind this to myself:

> An odds ratio (OR) is a measure of association between an exposure and an outcome. The OR represents the odds that an outcome will occur given a particular exposure, compared to the odds of the outcome occurring in the absence of that exposure. - source
So, in this case, the odds ratio for the variable sexM compares the odds of a male using alcohol heavily to the odds of a female doing so. The odds ratio is ca. 2.32 - this means that in our data, the odds of heavy drinking are about 2.32 times higher for a boy than for a girl.
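The same number can be recovered directly from the model: exponentiating a logistic-regression coefficient gives the corresponding odds ratio (the estimate is taken from the model summary above):

```r
# Coefficient of sexM from the logistic model summary.
coef_sexM <- 0.84223

# Odds ratio = exp(coefficient).
or_sexM <- exp(coef_sexM)
or_sexM  # about 2.32, matching the OR table
```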
All OR estimates are above 1, which indicates a positive association with high_use for all the variables chosen at the beginning.
Here, using the variables which, according to the logistic regression model above, had a statistical relationship with high/low alcohol consumption, we will explore the predictive power of the model. Only sex and absences had a high statistical significance, so let’s keep these.
m1 <- glm(high_use ~ sex + absences, data = alc, family = "binomial")
summary(m1)
##
## Call:
## glm(formula = high_use ~ sex + absences, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2753 -0.8753 -0.6081 1.0921 1.9920
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.83606 0.22251 -8.252 < 2e-16 ***
## sexM 0.97762 0.23982 4.076 4.57e-05 ***
## absences 0.09659 0.02306 4.189 2.80e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 430.07 on 379 degrees of freedom
## AIC: 436.07
##
## Number of Fisher Scoring iterations: 4
Below is a table of predictions compared to the actual values of the variable high_use in our new model. Two columns are added: probability, which contains the predicted probabilities, and prediction, which is TRUE if the value of probability is larger than 0.5.
probabilities <- predict(m1, type = "response")
# Add the predicted probabilities to 'alc'.
alc <- mutate(alc, probability = probabilities)
# Use the probabilities to make a prediction of high_use.
alc <- mutate(alc, prediction = probability>0.5)
# Tabulate the target variable versus the predictions.
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 258 10
## TRUE 88 26
The model classifies (258 + 26) cases correctly. However, in 10 cases it predicted a respondent to be a heavy drinker when in fact they were not, and in as many as 88 cases it predicted a respondent NOT to be a heavy drinker when they actually were.
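The confusion table translates directly into accuracy and error figures (the counts are taken from the table above):

```r
# Confusion table counts: rows = actual high_use, columns = prediction.
correct   <- 258 + 26  # true negatives + true positives
incorrect <- 10 + 88   # false positives + false negatives
total     <- correct + incorrect

accuracy   <- correct / total
error_rate <- incorrect / total
c(accuracy = accuracy, error_rate = error_rate)
```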
So, what is the error in the prediction?
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2565445
The training error seems to be ca. 26% (0.2565).
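The training error is an optimistic estimate, since it evaluates the model on the same data it was fitted on. With the boot library already loaded, the same loss function can be plugged into K-fold cross-validation via cv.glm(). The sketch below uses simulated data rather than `alc`, so the numbers are illustrative only:

```r
library(boot)

set.seed(2017)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + x))  # simulated binary outcome
d <- data.frame(y = y, x = x)

fit <- glm(y ~ x, data = d, family = "binomial")

# Same loss function as above: mean prediction error.
loss_func <- function(class, prob) mean(abs(class - prob) > 0.5)

# 10-fold cross-validation; delta[1] is the CV prediction error.
cv <- cv.glm(data = d, glmfit = fit, cost = loss_func, K = 10)
cv$delta[1]
```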
A lot of fun, surely.
Tasks 1-3.
First, access the necessary libraries.
# Access the needed libraries:
library(dplyr)
library(tidyr)
library(ggplot2)
library(boot)
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
library(tidyverse)
## -- Attaching packages -------------------------------------- tidyverse 1.2.1 --
## ✓ tibble 1.3.4 ✓ purrr 0.2.4
## ✓ readr 1.1.1 ✓ stringr 1.2.0
## ✓ tibble 1.3.4 ✓ forcats 0.2.0
## -- Conflicts ----------------------------------------- tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
## x MASS::select() masks dplyr::select()
library(corrplot)
## corrplot 0.84 loaded
Let’s load the Boston data from the MASS package and explore the structure and the dimensions of the data and describe the dataset.
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
The Boston data frame has 506 rows and 14 columns. It describes housing values in the suburbs of Boston.
What are the variables in the data?
colnames(Boston)
## [1] "crim" "zn" "indus" "chas" "nox" "rm" "age"
## [8] "dis" "rad" "tax" "ptratio" "black" "lstat" "medv"
The descriptions of the variables are available here. They concern such things as per capita crime rate by town, average number of rooms per dwelling, or even pupil-teacher ratio by town.
Now let’s have a look at a graphical overview of the data and show summaries of the variables in the data.
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
From the summary of the variables we can see minimum, maximum, median and mean values as well as the 1st and 3rd quartiles of the variables.
The correlations between the different variables can be studied with the help of a correlations matrix and a correlations plot.
# First calculate the correlation matrix and round it so that it includes only two digits:
cor_matrix<-cor(Boston) %>% round(digits = 2)
# Print the correlation matrix:
cor_matrix
## crim zn indus chas nox rm age dis rad tax
## crim 1.00 -0.20 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58
## zn -0.20 1.00 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31
## indus 0.41 -0.53 1.00 0.06 0.76 -0.39 0.64 -0.71 0.60 0.72
## chas -0.06 -0.04 0.06 1.00 0.09 0.09 0.09 -0.10 -0.01 -0.04
## nox 0.42 -0.52 0.76 0.09 1.00 -0.30 0.73 -0.77 0.61 0.67
## rm -0.22 0.31 -0.39 0.09 -0.30 1.00 -0.24 0.21 -0.21 -0.29
## age 0.35 -0.57 0.64 0.09 0.73 -0.24 1.00 -0.75 0.46 0.51
## dis -0.38 0.66 -0.71 -0.10 -0.77 0.21 -0.75 1.00 -0.49 -0.53
## rad 0.63 -0.31 0.60 -0.01 0.61 -0.21 0.46 -0.49 1.00 0.91
## tax 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1.00
## ptratio 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46
## black -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44
## lstat 0.46 -0.41 0.60 -0.05 0.59 -0.61 0.60 -0.50 0.49 0.54
## medv -0.39 0.36 -0.48 0.18 -0.43 0.70 -0.38 0.25 -0.38 -0.47
## ptratio black lstat medv
## crim 0.29 -0.39 0.46 -0.39
## zn -0.39 0.18 -0.41 0.36
## indus 0.38 -0.36 0.60 -0.48
## chas -0.12 0.05 -0.05 0.18
## nox 0.19 -0.38 0.59 -0.43
## rm -0.36 0.13 -0.61 0.70
## age 0.26 -0.27 0.60 -0.38
## dis -0.23 0.29 -0.50 0.25
## rad 0.46 -0.44 0.49 -0.38
## tax 0.46 -0.44 0.54 -0.47
## ptratio 1.00 -0.18 0.37 -0.51
## black -0.18 1.00 -0.37 0.33
## lstat 0.37 -0.37 1.00 -0.74
## medv -0.51 0.33 -0.74 1.00
# Visualize the correlation matrix with a correlations plot:
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)
From the plot above we can easily see which variables correlate with which, and whether the correlation is positive (blue) or negative (red). For example, rad and tax correlate strongly positively (0.91), while nox and dis correlate strongly negatively (-0.77).
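As a quick sanity check of how cor() behaves, here is a toy example (the data frame d is made up for illustration): perfectly linear columns give correlations of exactly 1 or -1.

```r
# Toy data frame: b is a linear function of a, while c runs in the opposite direction
d <- data.frame(a = 1:5, b = 2 * (1:5), c = 5:1)
round(cor(d), 2)
#    a  b  c
# a  1  1 -1
# b  1  1 -1
# c -1 -1  1
```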
Task 4
In this part, we are performing the following:

* Standardize the dataset and print out summaries of the scaled data.
* Create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate).
* Use the quantiles as the break points in the categorical variable.
* Drop the old crime rate variable from the dataset.
* Divide the dataset to train and test sets, so that 80% of the data belongs to the train set.
Let’s standardize the dataset and print out summaries of the scaled data for the later classification and clustering analysis. How did the variables change?
# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
After scaling, the variables are on a comparable scale, which makes them easier to compare and to use in estimation: each one now has mean zero and standard deviation one.
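To see what scale() actually does, here is a minimal sketch on a made-up vector: it subtracts the mean and divides by the standard deviation, so the result has mean 0 and sd 1.

```r
x <- c(2, 4, 6, 8)                         # toy vector for illustration
scaled_builtin <- as.vector(scale(x))      # scale() returns a matrix; drop to a vector
scaled_manual  <- (x - mean(x)) / sd(x)    # the same operation by hand
all.equal(scaled_builtin, scaled_manual)   # TRUE
c(mean = mean(scaled_builtin), sd = sd(scaled_builtin))  # 0 and 1
```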
Create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate). This variable shows the quantiles of the scaled crime rate and is now used instead of the previous continuous one.
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
# summary of the scaled crime rate
summary(boston_scaled$crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419367 -0.410563 -0.390280 0.000000 0.007389 9.924110
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
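The same quantile-plus-cut recipe can be tried on a made-up vector to see how the four bins come out roughly equal in size:

```r
v <- 1:8
b <- quantile(v)   # break points at the 0%, 25%, 50%, 75% and 100% quantiles
f <- cut(v, breaks = b, include.lowest = TRUE,
         labels = c("low", "med_low", "med_high", "high"))
table(f)           # two observations fall into each bin
```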
Let’s drop the old crime rate variable from the dataset and replace it with the new categorical variable for crime rates - for clarity:
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
Finally, the last step: 80% of the data will become the training (train) set and the remaining 20% the test set. The actual predictions on new data are done with the test set.
# number of rows in the Boston dataset
n <- nrow(boston_scaled)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
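On a toy scale, the split logic works like this: sample() picks 80% of the row indices without replacement, and negative indexing gives the remaining 20%. (The seed is only there to make this sketch reproducible.)

```r
set.seed(42)
n <- 10
ind <- sample(n, size = n * 0.8)   # 8 random row indices out of 10
train_rows <- (1:n)[ind]
test_rows  <- (1:n)[-ind]
length(train_rows)                 # 8
length(test_rows)                  # 2
intersect(train_rows, test_rows)   # integer(0): the sets do not overlap
```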
Tasks 5 and 6
Now let’s fit the linear discriminant analysis on the train set. LDA is a generalization of Fisher’s linear discriminant, a method used in statistics, pattern recognition and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events (as explained by everyone’s fav source).
We will use the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables.
# linear discriminant analysis
lda.fit <- lda(crime ~., data = train)
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2648515 0.2599010 0.2425743 0.2326733
##
## Group means:
## zn indus chas nox rm
## low 0.92930820 -0.8785893 -0.12514775 -0.8835188 0.40158429
## med_low -0.09195382 -0.2604916 0.02764047 -0.5520584 -0.16168950
## med_high -0.38923530 0.1723385 0.29011382 0.3885945 0.07573478
## high -0.48724019 1.0172896 -0.02102480 1.0789684 -0.36125773
## age dis rad tax ptratio
## low -0.9006903 0.8839428 -0.6985111 -0.7552831 -0.417623316
## med_low -0.3574414 0.3471098 -0.5476413 -0.4798661 -0.002336043
## med_high 0.3756123 -0.3432721 -0.4263882 -0.3304693 -0.284883883
## high 0.8233428 -0.8609524 1.6363892 1.5128120 0.778752050
## black lstat medv
## low 0.37686539 -0.74163398 0.5129507
## med_low 0.35677453 -0.12703412 -0.0227242
## med_high 0.06945703 0.01947604 0.1771687
## high -0.80993335 0.85192361 -0.6543269
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.12018396 0.63950054 -0.84997919
## indus 0.01977607 -0.09775248 0.41166225
## chas -0.08680751 -0.08801882 0.12325932
## nox 0.30725784 -0.88693126 -1.32219192
## rm -0.11826887 -0.05415764 -0.11242234
## age 0.30148769 -0.31534271 -0.18175074
## dis -0.07935844 -0.27671205 0.06823998
## rad 3.40498754 1.00512732 -0.12011639
## tax 0.02956068 -0.04856418 0.55070338
## ptratio 0.10357974 -0.03294491 -0.17435530
## black -0.13797385 0.08057081 0.25403055
## lstat 0.20608473 -0.17771654 0.33194919
## medv 0.19995086 -0.38288567 -0.28132349
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9549 0.0335 0.0117
The LDA calculates the probability of a new observation being classified as belonging to each class on the basis of the trained model, and assigns every observation to the most probable class.
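A small illustration of that idea with MASS::lda on the built-in iris data (just a stand-in for our Boston setup): predict() returns both the per-class posterior probabilities and the most probable class for each new observation.

```r
library(MASS)
fit <- lda(Species ~ ., data = iris)
pred <- predict(fit, newdata = iris[c(1, 51, 101), ])
pred$class               # the most probable class for each observation
rowSums(pred$posterior)  # the posterior probabilities sum to 1 per row
```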
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 4)
A biplot is a visualisation that allows us to clearly see the most influential predictor variables. It is clearly visible that accessibility to radial highways - rad - is the most telling variable.
In order to assess the performance of the model in predicting the crime rate, let’s save the crime categories from the test set and then remove the categorical crime variable from the test dataset…
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
…and then predict the classes with the LDA model on the test data with the predict() function, and cross tabulate the results with the crime categories from the test set:
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 15 5 0 0
## med_low 3 15 3 0
## med_high 0 6 20 2
## high 0 0 0 33
The cross tabulation of the results tells us that the model predicts the high crime rate category perfectly (which is to be expected, since rad was such a telling feature previously); it has some trouble separating med_low from low and med_high, but overall it performs really well.
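From the printed table we can also compute the overall accuracy by hand: the diagonal holds the correctly classified observations. The matrix below simply re-types the table above; in an actual run one would work with table(correct_classes, lda.pred$class) directly.

```r
# Confusion matrix copied from the cross tabulation above
conf <- matrix(c(15,  5,  0,  0,
                  3, 15,  3,  0,
                  0,  6, 20,  2,
                  0,  0,  0, 33),
               nrow = 4, byrow = TRUE,
               dimnames = list(correct   = c("low", "med_low", "med_high", "high"),
                               predicted = c("low", "med_low", "med_high", "high")))
accuracy <- sum(diag(conf)) / sum(conf)  # correct predictions / all predictions
round(accuracy, 3)                       # 0.814
```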
Task 7
It’s time for data clustering. Let’s reload the Boston dataset and standardize it.
# center and standardize variables
boston_scaled <- scale(Boston)
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
The next step is to calculate the (Euclidean) distances between the observations, and to do that we’ll use a Euclidean distance matrix:
# euclidean distance matrix (note: computed here on the original, unscaled Boston data)
dist_eu <- dist(Boston)
# look at the summary of the distances
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.119 85.624 170.539 226.315 371.950 626.047
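dist() computes pairwise Euclidean distances between the rows; a two-point sketch confirms it matches the familiar sqrt(sum((a - b)^2)) formula:

```r
m <- rbind(c(0, 0), c(3, 4))    # two points in the plane
as.numeric(dist(m))             # 5, the classic 3-4-5 triangle
sqrt(sum((m[1, ] - m[2, ])^2))  # the same distance by hand
```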
Now let’s perform the K-means clustering with K=3 and have a look at a pairs plot of columns 6-10:
# k-means clustering
km <-kmeans(Boston, centers = 3)
# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)
But is it optimal? How do we know what the optimal amount of clusters is?
Let’s take the within cluster sum of squares (WCSS) and look at the changes in it depending on the number of clusters. The optimal number of clusters shows as a sharp drop in total WCSS.
set.seed(123)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')
The optimal number of clusters seems to be 2, so let’s use that:
# k-means clustering
km <-kmeans(Boston, centers = 2)
# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)
We can also have a look at other columns:
pairs(Boston[7:14], col = km$cluster)
Again it looks like the same variables as before are the most distinctive: access to radial highways (rad) and property tax (tax).
On to the super-bonus exercise (instead of the bonus), because it’s worth more points.
Run the code below for the (scaled) train data that you used to fit the LDA. The code creates a matrix product, which is a projection of the data points.
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
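The dimensions above make the matrix product well defined: a 404 x 13 data matrix times a 13 x 3 coefficient matrix gives a 404 x 3 matrix of discriminant scores. A toy version of the same shape logic:

```r
A <- matrix(1:6, nrow = 2)  # 2 x 3, stands in for the 404 x 13 predictor matrix
B <- matrix(1:6, nrow = 3)  # 3 x 2, stands in for the 13 x 3 scaling matrix
dim(A %*% B)                # 2 2: rows of A, columns of B
```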
Next, install and access the plotly package. Create a 3D plot (Cool!) of the columns of the matrix product by typing the code below.
# access the needed libraries:
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')
Adjust the code: add argument color as a argument in the plot_ly() function. Set the color to be the crime classes of the train set.
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$crime)
Draw another 3D plot where the color is defined by the clusters of the k-means. The earlier k-means was fitted on the full Boston data (506 rows), so its cluster labels don’t line up with the 404 training observations; let’s re-run it on the train predictors and color by km$cluster, the per-observation cluster assignments (km$centers is just the matrix of cluster centers, not a color vector):
# re-run k-means on the train predictors so the cluster labels match the plotted points
km <- kmeans(model_predictors, centers = 2)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = km$cluster)
Hmm. This is difficult to interpret?